Data-Free Knowledge Distillation (DFKD) has recently attracted attention thanks to its appealing ability to transfer knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator is updated, the distribution of the synthetic data changes. This distribution shift can be large if the generator and the student are trained adversarially, causing the student to forget the knowledge acquired in earlier steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD), which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be viewed as an ensemble of the generator's old versions and usually changes less drastically than the generator, training on its synthetic samples helps the student recall past knowledge and prevents the student from adapting too quickly to the generator's latest updates. Our experiments on six benchmark datasets, including ImageNet and Places365, show that MAD outperforms competing methods in handling the large distribution shift problem. Our method also compares favorably with existing DFKD methods and even achieves state-of-the-art results in some cases.
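Below is a minimal sketch of the EMA-generator mechanism described above, assuming a PyTorch setup; the momentum value, module names, and the simplified student loss are illustrative assumptions rather than the paper's exact training recipe (in particular, the adversarial generator update is omitted).

```python
import copy
import torch

def make_ema_copy(gen):
    """Initialize the EMA generator as a frozen copy of the current generator."""
    ema_gen = copy.deepcopy(gen)
    for p in ema_gen.parameters():
        p.requires_grad_(False)
    return ema_gen

def ema_update(ema_gen, gen, momentum=0.999):
    """Exponential moving average of generator weights (momentum is illustrative)."""
    with torch.no_grad():
        for p_ema, p in zip(ema_gen.parameters(), gen.parameters()):
            p_ema.mul_(momentum).add_(p, alpha=1.0 - momentum)

def student_step(student, teacher, gen, ema_gen, kd_loss, noise_dim=100, batch_size=64):
    """One simplified student update using synthetic samples from both generators."""
    z = torch.randn(batch_size, noise_dim)
    with torch.no_grad():
        x_new = gen(z)        # samples reflecting the generator's latest update
        x_old = ema_gen(z)    # samples from the slowly changing EMA copy
        t_new, t_old = teacher(x_new), teacher(x_old)
    loss = kd_loss(student(x_new), t_new) + kd_loss(student(x_old), t_old)
    return loss               # caller backpropagates through the student only
```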
Building a system that can converse meaningfully with humans about what they are watching would be a technological feat. A setup toward that goal is presented as the video dialog task, which requires the system to generate natural utterances in response to questions in an ongoing dialog. The task poses great visual, linguistic, and reasoning challenges that cannot easily be overcome without an appropriate representation scheme for video and dialog that supports high-level reasoning. To tackle these challenges, we present a new object-centric framework for video dialog that supports neural reasoning, dubbed COST (Conversation about Objects in Space-Time). Here, the dynamic spatio-temporal visual content of the video is first parsed into object trajectories. Given this video abstraction, COST maintains and tracks object-associated dialog states, which are updated upon receiving a new question. Object interactions are inferred dynamically and conditionally for each question, and they serve as the basis for relational reasoning among the objects. COST also maintains a history of previous answers, which allows retrieval of relevant object-centric information to enrich the answer-forming process. Language production then proceeds step by step, conditioned on the current utterance, the existing dialog, and the current question. We evaluate COST on the DSTC7 and DSTC8 benchmarks, demonstrating its competitiveness against the state of the art.
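As an illustration of the bookkeeping this implies (and not the authors' implementation), the following toy module keeps one dialog state per object trajectory, updates every state with each incoming question, and scores pairwise interactions between the updated states as a basis for relational reasoning; all module choices and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ObjectDialogStates(nn.Module):
    """Toy skeleton: per-object dialog states updated question by question."""
    def __init__(self, obj_dim=256, q_dim=256):
        super().__init__()
        self.update = nn.GRUCell(q_dim, obj_dim)          # state update per object
        self.interact = nn.Bilinear(obj_dim, obj_dim, 1)  # simplified pair scorer

    def forward(self, obj_states, question):
        # obj_states: (num_objects, obj_dim); question: (q_dim,)
        n = obj_states.size(0)
        q = question.unsqueeze(0).expand(n, -1)
        new_states = self.update(q, obj_states)           # question-conditioned state update
        # pairwise interaction scores among the updated object states
        a = new_states.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        b = new_states.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        pair_scores = self.interact(a, b).view(n, n)
        return new_states, pair_scores
```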
Trojan attacks on deep neural networks are both dangerous and stealthy. Over the past few years, Trojan attacks have advanced from using only a single input-agnostic trigger and targeting only one class to using multiple input-specific triggers and targeting multiple classes. However, Trojan defenses have not caught up with this development. Most defense methods still make inadequate assumptions about Trojan triggers and target classes, and thus can easily be circumvented by modern Trojan attacks. To deal with this problem, we propose two novel "filtering" defenses called Variational Input Filtering (VIF) and Adversarial Input Filtering (AIF), which leverage lossy data compression and adversarial learning, respectively, to effectively purify potential Trojan triggers in the input at run time without making assumptions about the number of triggers/target classes or the input-dependence property of the triggers. In addition, we introduce a new defense mechanism called "Filtering-then-Contrasting" (FtC), which helps avoid the drop in classification accuracy on clean data caused by "filtering", and combine it with VIF/AIF to derive new defenses of this kind. Extensive experimental results and ablation studies show that our proposed defenses significantly outperform well-known baseline defenses in mitigating five advanced Trojan attacks, including two recent state-of-the-art ones, while being quite robust to small amounts of training data and large triggers.
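A hedged sketch of the filtering-then-contrasting decision rule suggested above, assuming access to a trained input filter (e.g., a VIF- or AIF-style purifier); the function names and the rejection policy are illustrative, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def ftc_predict(classifier, input_filter, x, reject_label=-1):
    """Contrast predictions on the raw and the filtered input.

    If both agree, the input is treated as clean and the raw prediction is kept
    (avoiding any accuracy drop caused by filtering); if they disagree, the input
    is flagged as potentially Trojaned.
    """
    y_raw = classifier(x).argmax(dim=1)
    y_filtered = classifier(input_filter(x)).argmax(dim=1)
    agree = y_raw.eq(y_filtered)
    out = torch.where(agree, y_raw, torch.full_like(y_raw, reject_label))
    return out, ~agree  # predictions and a per-sample "suspicious" mask
```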
The sample efficiency of reinforcement learning can be improved by recalling past experiences from an episodic memory. We propose a new model-based, trajectory-based episodic memory that addresses the current limitations of episodic control. Our memory estimates trajectory values, guiding the agent towards good policies. Built on this memory, we construct a complementary learning model via a dynamic hybrid control that unifies model-based, episodic, and habitual learning into a single architecture. Experiments show that our model learns faster and better than other strong reinforcement learning agents across a variety of environments, including stochastic and non-Markovian ones.
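For intuition only, the toy memory below records the best discounted return observed from each state along finished trajectories and returns it on lookup; it is a tabular stand-in for the trajectory-value memory mentioned above, not the paper's model-based architecture.

```python
class EpisodicValueMemory:
    """Tabular toy: remember the best discounted return seen from each state."""
    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.values = {}  # state -> best return observed from that state

    def write_trajectory(self, states, rewards):
        """states/rewards collected during one episode (same length)."""
        ret = 0.0
        for s, r in zip(reversed(states), reversed(rewards)):
            ret = r + self.gamma * ret                        # return from s onward
            self.values[s] = max(self.values.get(s, float("-inf")), ret)

    def read(self, state, default=0.0):
        """Value estimate used to guide the agent toward good past policies."""
        return self.values.get(state, default)
```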
The optimistic nature of the Q-learning target leads to an overestimation bias, which is an inherent problem associated with standard Q-learning. Such bias fails to account for the possibility of low returns, particularly in risky scenarios. However, the presence of bias, whether overestimation or underestimation, need not necessarily be undesirable. In this paper, we analyze the utility of biased learning and show that specific types of bias may be preferable, depending on the scenario. Based on this finding, we design a novel reinforcement learning algorithm, Balanced Q-learning, in which the target is modified to be a convex combination of a pessimistic and an optimistic term, whose associated weights are determined online, analytically. We prove the convergence of this algorithm in a tabular setting and empirically demonstrate its superior learning performance in various environments.
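One natural reading of the balanced target is sketched below, with the pessimistic term taken as a minimum and the optimistic term as a maximum over next-state action values; the exact terms and the analytical online rule for the weight beta are defined in the paper, so treat this as an assumption-laden illustration.

```python
import numpy as np

def balanced_q_target(reward, next_q_values, beta, gamma=0.99):
    """Convex combination of a pessimistic (min) and an optimistic (max) bootstrap.

    beta in [0, 1] is the balancing weight; the paper determines it analytically
    online, which is omitted here.
    """
    pessimistic = np.min(next_q_values)
    optimistic = np.max(next_q_values)
    return reward + gamma * (beta * pessimistic + (1.0 - beta) * optimistic)
```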
A process-aware recommender system can provide critical decision support functionality to aid business process execution by recommending what actions to take next. Building on recent advances in the field of deep learning, we introduce a novel approach for constructing a process-aware recommender system based on a memory-augmented neural network (MANN). We propose a novel network architecture, namely the write-protected dual controller memory-augmented neural network (DCw-MANN), for building prescriptive models. To evaluate the feasibility and usefulness of our approach, we consider three real-world datasets and show that our approach outperforms several baselines on the tasks of suffix recommendation and next task prediction.
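To make the two tasks concrete, the following minimal model predicts the next activity of a running case from its prefix with a plain GRU and greedily rolls it out for suffix recommendation; this is a baseline-style illustration of the task framing only, not the proposed write-protected dual-controller MANN.

```python
import torch
import torch.nn as nn

class NextActivityPredictor(nn.Module):
    """Toy next-task model: embed the activity prefix, run a GRU, score the next activity."""
    def __init__(self, num_activities, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_activities, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_activities)

    def forward(self, prefix):                  # prefix: (batch, prefix_len) activity ids
        h, _ = self.rnn(self.embed(prefix))
        return self.head(h[:, -1])              # logits over the next activity

    @torch.no_grad()
    def recommend_suffix(self, prefix, max_len=10):
        """Greedy suffix recommendation: repeatedly append the predicted next activity."""
        seq = prefix.clone()
        for _ in range(max_len):
            nxt = self.forward(seq).argmax(dim=1, keepdim=True)
            seq = torch.cat([seq, nxt], dim=1)
        return seq[:, prefix.size(1):]          # recommended continuation of the case
```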
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains the erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR uses a model trained on labeled patches to assess patch correctness based on program syntax. The benefits of INVALIDATOR are three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
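A hedged sketch of the two-stage decision logic described above; invariants are represented as plain sets, and the helper names and the syntax classifier interface are assumptions rather than INVALIDATOR's actual API.

```python
def assess_patch(patch_invariants, correct_invariants, error_invariants,
                 syntax_model, patch_code):
    """Return True if the APR-generated patch is judged overfitting (sketch).

    Semantic stage: the patch overfits if it violates invariants of correct
    behavior, or preserves invariants characterizing the original bug.
    Syntactic stage: otherwise, fall back to a classifier trained on labeled
    patches on top of a pre-trained language model's representation.
    """
    violates_correct_spec = not correct_invariants.issubset(patch_invariants)
    keeps_error_behavior = bool(error_invariants & patch_invariants)
    if violates_correct_spec or keeps_error_behavior:
        return True                                       # semantic evidence of overfitting
    return syntax_model.predict_overfitting(patch_code)   # inconclusive: use syntax
```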
Modern deep neural networks have achieved superhuman performance in tasks ranging from image classification to game play. Surprisingly, these various complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs for deep linear networks under the popular mean squared error (MSE) and cross entropy (CE) losses. Furthermore, we extend our study to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse under this setting.
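For reference, the structural properties referred to as Neural Collapse are usually stated informally as follows, for last-layer features $h_{c,i}$, class means $\mu_c$, global mean $\mu_G$, classifier rows $w_c$, and $C$ classes; this restates the known observations of Papyan et al., not this paper's contribution.

```latex
% NC1 (variability collapse): within-class covariance of last-layer features vanishes
\[ \Sigma_W := \operatorname{Ave}_{c,i}\,(h_{c,i}-\mu_c)(h_{c,i}-\mu_c)^{\top} \;\longrightarrow\; 0 \]
% NC2 (simplex ETF): recentered class means become equinorm and maximally equiangular
\[ \frac{\langle \mu_c-\mu_G,\;\mu_{c'}-\mu_G\rangle}{\lVert\mu_c-\mu_G\rVert\,\lVert\mu_{c'}-\mu_G\rVert}
   \;\longrightarrow\; \frac{C\,\delta_{cc'}-1}{C-1} \]
% NC3 (self-duality): classifier rows align with the recentered class means
\[ \frac{w_c}{\lVert w_c\rVert} \;\longrightarrow\; \frac{\mu_c-\mu_G}{\lVert\mu_c-\mu_G\rVert} \]
% NC4 (nearest class mean): prediction reduces to the nearest class-mean rule
\[ \arg\max_{c'}\,\langle w_{c'},\,h\rangle \;\longrightarrow\; \arg\min_{c'}\,\lVert h-\mu_{c'}\rVert \]
```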
We present a Machine Learning (ML) case study to illustrate the challenges of clinical translation for a real-time AI-empowered echocardiography system using data from ICU patients in LMICs. The case study covers data preparation, curation, and labelling of 2D ultrasound videos of 31 ICU patients in LMICs, as well as the selection, validation, and deployment of three thinner neural networks to classify the apical four-chamber view (4CV). Results of the ML heuristics show promising implementation, validation, and application of thinner networks to classify 4CV with limited datasets. We conclude by noting the need for (a) datasets with greater diversity of demographics and diseases, and (b) further investigation of thinner models that can run on low-cost hardware so they can be clinically translated to the ICU in LMICs. The code and other resources to reproduce this work are available at https://github.com/vital-ultrasound/ai-assisted-echocardiography-for-low-resource-countries.
Ensemble learning combines the results of multiple machine learning models in order to provide a better and optimised predictive model with reduced bias, reduced variance, and improved predictions. However, in federated learning it is not feasible to apply centralised ensemble learning directly due to privacy concerns. Hence, a mechanism is required to combine the results of local models to produce a global model. Most distributed consensus algorithms, such as Byzantine fault tolerance (BFT), do not normally perform well in such applications, because the predictions of some peers are disregarded, so a majority of peers can win without even considering other peers' decisions. Additionally, the confidence score of each peer's result is not normally taken into account, although it is an important feature to consider for ensemble learning. Moreover, the problem of a tie event is often left unaddressed by methods such as BFT. To fill these research gaps, we propose PoSw (Proof of Swarm), a novel distributed consensus algorithm for ensemble learning in a federated setting, inspired by particle swarm-based algorithms for solving optimisation problems. The proposed algorithm is theoretically proven to always converge in a relatively small number of steps and has mechanisms to resolve tie events while trying to achieve sub-optimum solutions. We experimentally validated the performance of the proposed algorithm using ECG classification as an example application in healthcare, showing that the ensemble learning model outperformed all local models and even the FL-based global model. To the best of our knowledge, the proposed algorithm is the first attempt to achieve consensus over the output results of distributed models trained using federated learning.
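As a point of reference for the gaps listed above (the actual swarm-based protocol is not reproduced here), the sketch below aggregates peer predictions weighted by their confidence scores and detects tie events explicitly; the function and variable names are illustrative.

```python
from collections import defaultdict

def aggregate_peer_votes(peer_outputs):
    """peer_outputs: list of (predicted_label, confidence) pairs, one per peer.

    Returns the confidence-weighted winning label and a flag indicating a tie,
    which a full consensus protocol such as PoSw would still need to resolve.
    """
    scores = defaultdict(float)
    for label, confidence in peer_outputs:
        scores[label] += confidence          # weight each vote by its confidence
    best = max(scores.values())
    winners = [label for label, s in scores.items() if s == best]
    is_tie = len(winners) > 1
    return winners[0], is_tie
```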